64 research outputs found

    MorpheuS: Automatic music generation with recurrent pattern constraints and tension profiles

    Generating music with long-term structure is one of the main challenges in the field of automatic composition. This article describes MorpheuS, a music generation system. MorpheuS uses state-of-the-art pattern detection techniques to find repeated patterns in a template piece. These patterns are then used to constrain the generation process for a new polyphonic composition. The music generation process is guided by an efficient optimization algorithm, variable neighborhood search, which uses a mathematical model of tonal tension to derive its objective function. The ability to generate music according to a tension profile could be useful in a game or film music context. Pieces generated by MorpheuS have been performed in live concerts. This project is funded in part by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 658914.
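
    The abstract's core idea, variable neighborhood search driven by a tension-based objective under pattern constraints, can be sketched in a few lines. The snippet below is a toy illustration only: the `tension` function is a crude stand-in for the spiral-array tension model, and the neighborhood sizes, move operator, and `fixed_bars` constraint are illustrative assumptions, not MorpheuS's actual implementation.

```python
import random

# Hypothetical stand-in: the real MorpheuS objective uses a spiral-array
# tension model; here the "tension" of a bar is just its pitch range.
def tension(bar):
    return max(bar) - min(bar)

def cost(piece, target_profile):
    # Sum of squared deviations from the desired tension profile.
    return sum((tension(b) - t) ** 2 for b, t in zip(piece, target_profile))

def vns(piece, target_profile, fixed_bars, neighborhoods=(1, 2, 3), iters=2000):
    """Toy variable neighborhood search: perturb k random notes per move,
    leave bars locked by detected patterns untouched, keep improving moves."""
    best = [list(b) for b in piece]
    best_cost = cost(best, target_profile)
    for _ in range(iters):
        for k in neighborhoods:               # escalate neighborhood size
            cand = [list(b) for b in best]
            for _ in range(k):
                i = random.randrange(len(cand))
                if i in fixed_bars:           # pattern constraint: bar is fixed
                    continue
                j = random.randrange(len(cand[i]))
                cand[i][j] += random.choice([-2, -1, 1, 2])
            c = cost(cand, target_profile)
            if c < best_cost:
                best, best_cost = cand, c
                break                         # restart from the first neighborhood
    return best, best_cost

if __name__ == "__main__":
    piece = [[60, 64, 67], [62, 65, 69], [59, 62, 67], [60, 64, 67]]  # MIDI pitches per bar
    target = [4, 8, 10, 4]                    # desired tension per bar
    result, c = vns(piece, target, fixed_bars={0})
    print(result, c)
```

    Escalating to a larger neighborhood only when smaller moves stop improving, then dropping back after an improvement, is what distinguishes variable neighborhood search from plain local search.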

    PreBit -- A multimodal model with Twitter FinBERT embeddings for extreme price movement prediction of Bitcoin

    Bitcoin, with its ever-growing popularity, has demonstrated extreme price volatility since its origin. This volatility, together with its decentralised nature, makes Bitcoin highly susceptible to speculative trading as compared to more traditional assets. In this paper, we propose a multimodal model for predicting extreme price fluctuations. This model takes as input a variety of correlated assets, technical indicators, as well as Twitter content. In an in-depth study, we explore whether social media discussions from the general public on Bitcoin have predictive power for extreme price movements. A dataset of 5,000 tweets per day containing the keyword `Bitcoin' was collected from 2015 to 2021. This dataset, called PreBit, is made available online. In our hybrid model, we use sentence-level FinBERT embeddings, pretrained on financial lexicons, to capture the full content of the tweets and feed it to the model in an understandable way. By combining these embeddings with a Convolutional Neural Network, we build a predictive model for significant market movements. The final multimodal ensemble model includes this NLP model together with a model based on candlestick data, technical indicators and correlated asset prices. In an ablation study, we explore the contribution of the individual modalities. Finally, we propose and backtest a trading strategy based on the predictions of our models with varying prediction thresholds, and show that it can be used to build a profitable trading strategy with reduced risk over a `hold' or moving average strategy. Comment: 21 pages, preprint submitted to Elsevier Expert Systems with Applications.
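
    To make the fusion idea concrete, here is a minimal PyTorch sketch of how pre-computed sentence embeddings for a day's tweets could pass through a small CNN and be combined with a technical-indicator branch to predict the probability of an extreme move. Dimensions, layer sizes, and names are assumptions for illustration; this is not the authors' released architecture.

```python
import torch
import torch.nn as nn

class TweetPriceModel(nn.Module):
    """Illustrative sketch: a 1-D CNN over a day's sequence of pre-computed
    sentence embeddings, fused with a dense branch over technical-indicator
    features, predicting the probability of an extreme price move."""
    def __init__(self, emb_dim=768, n_indicators=12):
        super().__init__()
        self.text_branch = nn.Sequential(
            nn.Conv1d(emb_dim, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),          # pool over the tweet dimension
            nn.Flatten(),
        )
        self.ta_branch = nn.Sequential(nn.Linear(n_indicators, 32), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(64 + 32, 1), nn.Sigmoid())

    def forward(self, tweet_embs, indicators):
        # tweet_embs: (batch, n_tweets, emb_dim); Conv1d expects channels first
        x = self.text_branch(tweet_embs.transpose(1, 2))
        y = self.ta_branch(indicators)
        return self.head(torch.cat([x, y], dim=1))

model = TweetPriceModel()
p = model(torch.randn(4, 50, 768), torch.randn(4, 12))
print(p.shape)  # torch.Size([4, 1]) -- one extreme-move probability per day
```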

    Constructing Time-Series Momentum Portfolios with Deep Multi-Task Learning

    A diversified risk-adjusted time-series momentum (TSMOM) portfolio can deliver substantial abnormal returns and offer some degree of tail risk protection during extreme market events. The performance of existing TSMOM strategies, however, relies not only on the quality of the momentum signal but also on the efficacy of the volatility estimator. Yet many existing studies have treated these two factors as independent. Inspired by recent progress in Multi-Task Learning (MTL), we present a new approach using MTL in a deep neural network architecture that jointly learns portfolio construction and various auxiliary tasks related to volatility, such as forecasting realized volatility as measured by different volatility estimators. Through backtesting from January 2000 to December 2020 on a diversified portfolio of continuous futures contracts, we demonstrate that even after accounting for transaction costs of up to 3 basis points, our approach outperforms existing TSMOM strategies. Moreover, experiments confirm that adding auxiliary tasks indeed boosts the portfolio's performance. These findings demonstrate that MTL can be a powerful tool in finance.
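
    The joint-learning setup can be illustrated with a shared recurrent trunk, a position head, and auxiliary volatility-forecasting heads trained under one combined loss. The PyTorch toy below uses assumed dimensions; the Sharpe-style portfolio loss and the auxiliary weight are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class MTLMomentum(nn.Module):
    """Illustrative sketch: a shared LSTM trunk, a position head in [-1, 1],
    and auxiliary heads forecasting realized volatility under several estimators."""
    def __init__(self, n_features=8, hidden=32, n_vol_estimators=3):
        super().__init__()
        self.trunk = nn.LSTM(n_features, hidden, batch_first=True)
        self.position = nn.Sequential(nn.Linear(hidden, 1), nn.Tanh())
        self.vol_heads = nn.Linear(hidden, n_vol_estimators)

    def forward(self, x):
        h, _ = self.trunk(x)
        last = h[:, -1]                       # hidden state at the last time step
        return self.position(last).squeeze(-1), self.vol_heads(last)

def loss_fn(positions, next_returns, vol_pred, vol_target, aux_weight=0.5):
    pnl = positions * next_returns
    sharpe = pnl.mean() / (pnl.std() + 1e-8)  # maximise risk-adjusted return
    aux = nn.functional.mse_loss(vol_pred, vol_target)
    return -sharpe + aux_weight * aux

x = torch.randn(16, 60, 8)                    # batch of 60-day feature windows
pos, vol = MTLMomentum()(x)
print(loss_fn(pos, torch.randn(16), vol, torch.rand(16, 3)))
```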

    A Novel Interface for the Graphical Analysis of Music Practice Behaviors

    Practice is an essential part of music training, but critical, content-based analysis of practice behaviors still lacks tools that convey informative representations of practice sessions. To bridge this gap, we present a novel visualization system, the Music Practice Browser, for representing, identifying, and analysing music practice behaviors. The Music Practice Browser provides a graphical interface for reviewing recorded practice sessions, which allows musicians, teachers, and researchers to examine aspects and features of music practice behaviors. The system takes beat and practice segment information together with a musical score in XML format as input, and produces a number of different visualizations: Practice Session Work Maps give an overview of contiguous practice segments; Practice Segment Arcs make transitions and repeated segments evident; Practice Session Precision Maps facilitate the identification of errors; Tempo-Loudness Evolution Graphs track expressive variations over the course of a practice session. We then test the new system on practice sessions of pianists ranging from novice to expert. The practice patterns found include Drill-Correct, Drill-Smooth, Memorization Strategy, Review and Explore, and Expressive Evolution. The analysis reveals practice patterns and behavior differences between beginners and experts, such as a higher proportion of Drill-Smooth patterns in expert practice.
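
    As a rough illustration of one of these views, the snippet below draws Practice Segment Arcs with matplotlib from a hypothetical list of (start_beat, end_beat) practice segments: each arc spans the score region covered by one segment, so repeated and overlapping segments become visible at a glance. It is a conceptual sketch, not the Music Practice Browser itself.

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical data: each practice segment as (start_beat, end_beat) in score order.
segments = [(0, 16), (8, 16), (8, 24), (0, 24), (16, 32)]

fig, ax = plt.subplots(figsize=(8, 3))
for start, end in segments:
    mid, radius = (start + end) / 2.0, (end - start) / 2.0
    theta = np.linspace(0, np.pi, 100)        # semicircular arc over the segment
    ax.plot(mid + radius * np.cos(theta), radius * np.sin(theta), color="tab:blue")

ax.set_xlabel("score position (beats)")
ax.set_yticks([])
ax.set_title("Practice segment arcs (sketch)")
plt.show()
```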

    Tension ribbons: Quantifying and visualising tonal tension

    Tension is a complex multidimensional concept that is not easily quantified. This research proposes three methods for quantifying aspects of tonal tension based on the spiral array, a model for tonality. The cloud diameter measures the dispersion of clusters of notes in tonal space; the cloud momentum measures the movement of pitch sets in the spiral array; finally, tensile strain measures the distance between the local and global tonal context. The three methods are implemented in a system that displays the results as tension ribbons over the music score to allow for ease of interpretation. All three methods are extensively tested on data ranging from small snippets to phrases with the Tristan chord and larger sections from Beethoven and Schubert piano sonatas. They are further compared to results from an existing empirical experiment.
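
    Two of the measures lend themselves to a short sketch on a simplified spiral-array pitch representation (a helix advancing a quarter turn per perfect fifth): cloud diameter as the largest pairwise distance within a set of notes, and cloud momentum as the displacement of a cloud's centre between successive windows. The helix parameters and the unweighted centre are simplifications; the published model weights pitches (e.g. by duration) when computing the centre of effect.

```python
import numpy as np
from itertools import combinations

def spiral_position(k, r=1.0, h=np.sqrt(2.0 / 15.0)):
    """Pitch position on a simplified spiral array: k is the pitch's index along
    the line of fifths (C=0, G=1, D=2, ..., F=-1). Parameter values are
    illustrative choices, not prescribed by this abstract."""
    return np.array([r * np.sin(k * np.pi / 2), r * np.cos(k * np.pi / 2), k * h])

def cloud_diameter(cloud):
    """Largest pairwise distance among the notes in a 'cloud' (e.g. one beat)."""
    pts = [spiral_position(k) for k in cloud]
    return max(np.linalg.norm(a - b) for a, b in combinations(pts, 2))

def cloud_momentum(prev_cloud, cloud):
    """Distance moved by the cloud's (unweighted) centre between two windows."""
    prev_ce = np.mean([spiral_position(k) for k in prev_cloud], axis=0)
    ce = np.mean([spiral_position(k) for k in cloud], axis=0)
    return np.linalg.norm(ce - prev_ce)

print(cloud_diameter([0, 4, 1]))              # C major triad: C=0, E=4, G=1
print(cloud_momentum([0, 4, 1], [1, 5, 2]))   # C major triad to G major triad
```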

    Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model

    Numerous studies in the field of music generation have demonstrated impressive performance, yet virtually no models are able to directly generate music to match accompanying videos. In this work, we develop a generative music AI framework, Video2Music, that can generate music to match a provided video. We first curated a unique collection of music videos. Then, we analysed the music videos to obtain semantic, scene offset, motion, and emotion features. These distinct features are then employed as guiding input to our music generation model. We transcribe the audio files into MIDI and chords, and extract features such as note density and loudness. This results in a rich multimodal dataset, called MuVi-Sync, on which we train a novel Affective Multimodal Transformer (AMT) model to generate music given a video. This model includes a novel mechanism to enforce affective similarity between video and music. Finally, post-processing uses a biGRU-based regression model to estimate note density and loudness from the video features. This ensures a dynamic rendering of the generated chords, with varying rhythm and volume. In a thorough experiment, we show that our proposed framework can generate music that matches the video content in terms of emotion. The musical quality, along with the quality of music-video matching, is confirmed in a user study. The proposed AMT model, along with the new MuVi-Sync dataset, presents a promising step for the new task of music generation for videos.
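
    The post-processing step described above lends itself to a brief sketch: a bidirectional GRU regressing per-frame note density and loudness from video features. The PyTorch toy below assumes the feature dimension, hidden size, and output layout; it illustrates the idea rather than reproducing the released Video2Music code.

```python
import torch
import torch.nn as nn

class DensityLoudnessRegressor(nn.Module):
    """Sketch of the post-processing idea: a bidirectional GRU maps per-frame
    video features to per-frame note-density and loudness estimates used to
    render the generated chords with varying rhythm and volume."""
    def __init__(self, feat_dim=32, hidden=64):
        super().__init__()
        self.bigru = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2)   # [note density, loudness] per frame

    def forward(self, video_feats):
        h, _ = self.bigru(video_feats)        # (batch, frames, 2 * hidden)
        return self.out(h)

frames = torch.randn(2, 120, 32)              # 2 clips, 120 frames, 32-dim features
print(DensityLoudnessRegressor()(frames).shape)  # torch.Size([2, 120, 2])
```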